11 research outputs found

    Stochastische Mustererkennung zur Bildsegmentierung = Stochastic pattern recognition for image segmentation

    Aerial images frequently show wide homogeneous regions with inhomogeneous texture: regions that locally exhibit high variation in color values but, stochastically, display a pattern so apparently regular that a human easily recognizes them as a coherent area such as a forest, a field, a path, or a body of water. This work aims to detect such regions. To this end, texture models are first generated for various image regions, their structural similarity is determined, and sufficiently distinct texture models are stored. Subsequently, every pixel of the image is assigned to one of the found texture models via a structural metric and colored with that model's characteristic color. Three different approaches to texture modeling are examined and compared with respect to the goal of segmenting as many image regions as alike as possible that a human would consider similar (e.g. all tree crowns in the image, paths, or fields), and of obtaining segment boundaries where there are apparent transitions (e.g. forest edges, a car on a meadow, vehicle tracks in terrain, vegetation changes).
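The pipeline described in the abstract, generating texture models, keeping only sufficiently distinct ones, and assigning every image location to the nearest model, can be sketched in Python. The feature choice (local mean and standard deviation per block) and the merge threshold below are illustrative assumptions, not the three texture models actually compared in the thesis.

```python
import math

def block_feature(img, x, y, s):
    """Mean and standard deviation of gray values in an s-by-s block."""
    vals = [img[j][i] for j in range(y, y + s) for i in range(x, x + s)]
    m = sum(vals) / len(vals)
    sd = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return (m, sd)

def segment(img, s=4, merge_dist=10.0):
    """Assign each s-by-s block to one of a set of texture models.

    Models are created on the fly: a block founds a new model when its
    feature vector is farther than merge_dist from all existing models.
    Returns (models, labels) where labels[by][bx] is a model index.
    """
    h, w = len(img), len(img[0])
    models, labels = [], []
    for by in range(0, h - s + 1, s):
        row = []
        for bx in range(0, w - s + 1, s):
            f = block_feature(img, bx, by, s)
            best, best_d = None, merge_dist
            for k, m in enumerate(models):
                d = math.dist(f, m)
                if d < best_d:
                    best, best_d = k, d
            if best is None:
                best = len(models)
                models.append(f)
            row.append(best)
        labels.append(row)
    return models, labels
```

On an image whose left half is flat and whose right half carries a regular high-contrast pattern, the sketch produces exactly two models and labels the halves accordingly.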

    Leistungsstarke und effiziente Bildinterpolation = Powerful and efficient image interpolation


    Compressive sensing reconstruction of 3D wet refractivity based on GNSS and InSAR observations

    In this work, the reconstruction quality of an approach for neutrospheric water vapor tomography based on Slant Wet Delays (SWDs) obtained from Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR) is investigated. The novelties of this approach are (1) the use of both absolute GNSS and absolute InSAR SWDs for tomography and (2) the solution of the tomographic system by means of compressive sensing (CS). The tomographic reconstruction is performed based on (i) a synthetic SWD dataset generated using wet refractivity information from the Weather Research and Forecasting (WRF) model and (ii) a real dataset using GNSS and InSAR SWDs. Accordingly, the validation of the achieved results focuses (i) on a comparison of the refractivity estimates with the input WRF refractivities and (ii) on a comparison with radiosonde profiles. For the synthetic dataset, the results show that the CS approach yields a more accurate and more precise solution than least squares (LSQ). In addition, the benefit of adding synthetic InSAR SWDs to the tomographic system is analyzed: when applying CS, the added InSAR SWDs improve the solution both in magnitude and in scatter, whereas with LSQ no clear behavior is observed. For the real dataset, the estimated refractivities of both methodologies show consistent behavior, although the LSQ and CS solution strategies differ.
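The core idea of solving an underdetermined tomographic system by compressive sensing, finding the sparsest refractivity field consistent with the observed delays, can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch for the L1-regularized problem. The tiny 3x6 system, the regularization weight, and the step size are illustrative stand-ins, not the paper's actual tomography operator.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(z, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in z]

def ista(A, b, lam=0.01, step=0.1, iters=5000):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - bi for yi, bi in zip(matvec(A, x), b)]   # residual Ax - b
        g = matvec_T(A, r)                                  # gradient A^T r
        x = soft([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# Underdetermined 3x6 system whose sparsest consistent solution puts all
# weight on the third unknown, x = [0, 0, 2, 0, 0, 0].
A = [[1, 0, 1, 0, 0, 1],
     [0, 1, 1, 1, 0, 0],
     [1, 1, 0, 0, 1, 0]]
b = [2.0, 2.0, 0.0]
x = ista(A, b)
```

An unregularized least-squares solver would spread the solution over the row space of A; the L1 term concentrates it on the single active unknown, which mirrors why CS can outperform LSQ on sparse refractivity structure.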

    Calibration and synchronization of geospatial metadata for aerial sensor systems

    Given aerial imagery showing an object of interest, one often asks for the exact location of the object in the world. There are several approaches to answering this question, but one of the easiest and computationally cheapest is to use geospatial metadata, e.g. from a global navigation satellite system (GNSS) and an inertial navigation system (INS), together with a digital elevation model (DEM) of the observed area to estimate the target location. The quality of the result depends greatly on the precision of the metadata and on how accurately the metadata are synchronized to the individual image frames. This paper discusses how to quantitatively describe the accuracy of the metadata of an aerial motion imagery system. The aim is to have this information available for information fusion that improves the object location presumed by the metadata with information from image recognition algorithms.
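The metadata-based geolocation described above amounts to intersecting the camera's view ray, taken from the GNSS/INS metadata, with the DEM. A minimal sketch under simplifying assumptions (a callable terrain model, a flat DEM and round numbers in the example, which are illustrative rather than from the paper): march along the ray until it drops below the terrain, then refine by bisection.

```python
def geolocate(cam, direction, dem, step=50.0, max_range=20000.0):
    """Intersect a view ray with a terrain model.

    cam       -- camera position (x, y, z) from GNSS/INS metadata
    direction -- unit view ray (from the sensor orientation)
    dem       -- callable (x, y) -> terrain elevation
    """
    t_lo, t = 0.0, step
    while t <= max_range:
        x = cam[0] + t * direction[0]
        y = cam[1] + t * direction[1]
        z = cam[2] + t * direction[2]
        if z <= dem(x, y):
            break                       # ray passed below the terrain
        t_lo, t = t, t + step
    else:
        return None                     # no terrain hit within max_range
    for _ in range(40):                 # bisect between above/below samples
        mid = 0.5 * (t_lo + t)
        x = cam[0] + mid * direction[0]
        y = cam[1] + mid * direction[1]
        z = cam[2] + mid * direction[2]
        if z <= dem(x, y):
            t = mid
        else:
            t_lo = mid
    return (x, y, z)
```

For a camera at 1000 m looking 45 degrees down over flat terrain at 100 m, the hit point lands 900 m ahead of the camera, at terrain height. Errors in the metadata (position, orientation, timing) shift this intersection directly, which is why the paper's quantitative accuracy description matters for fusion.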

    Evaluation of georeferencing methods with respect to their suitability to address unsimilarity between the image to be referenced and the reference image

    In recent years, the operational costs of unmanned aircraft systems (UAS) have decreased massively. New sensors satisfying the weight and size restrictions of even small UAS cover many different spectral ranges and spatial resolutions. As a result, airborne imagery has become more and more available. Such imagery is used to address many different tasks in various fields of application. For many of those tasks, not only the content of the imagery itself is of interest, but also its spatial location, which requires the imagery to be properly georeferenced. Many UAS have an integrated GPS receiver together with some kind of INS device acquiring the sensor orientation to provide the georeference. However, both GPS and INS data can easily become unavailable for a period of time during a flight, e.g. due to sensor malfunction, transmission problems or jamming. Imagery gathered during such times lacks georeference. Moreover, even in datasets not affected by such problems, GPS and INS inaccuracies, together with potentially poor knowledge of the ground elevation, can render the location information accuracy insufficient for a given task. To provide or improve the georeference of an affected image, an image-to-reference registration can be performed if a suitable reference is available, e.g. a georeferenced orthophoto covering the area of the image to be georeferenced. Registration, and thus georeferencing, is achieved by determining a transformation between the image to be referenced and the reference which maximizes the coincidence of relevant structures present in both. Many methods have been developed to accomplish this task. Regardless of their differences, they usually perform better the more similar the image and the reference are in appearance.
This contribution evaluates a selection of such methods, all differing in the type of structure they use for the assessment of coincidence, with respect to their ability to tolerate unsimilarity in appearance. Similarity in appearance mainly depends on the following aspects: the similarity of abstraction levels (Is the reference e.g. an orthophoto or a topographical map?), the similarity of sensor types and spectral bands (Is the image e.g. a SAR image while the reference was passively sensed? Was e.g. a NIR sensor used for the image while a VIS sensor was used for the reference?), the similarity of resolutions (Is the ground sampling distance of the reference comparable to that of the image?), the similarity of capture parameters (Are e.g. the viewing angles comparable in the image and in the reference?), and the similarity of image content (Was there e.g. snow coverage present when the image was captured but not when the reference was captured?). The evaluation is done by determining the performance of each method on a set of pairs, each consisting of an image to be referenced and a reference, representing various degrees of unsimilarity with respect to each of the above-mentioned aspects.
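As a minimal illustration of registration by maximizing coincidence, the sketch below searches integer translations for the peak of zero-mean normalized cross-correlation (ZNCC). The pseudo-random reference and the 3x3 crop are illustrative; the methods evaluated in the contribution are considerably more sophisticated, and ZNCC itself is exactly the kind of intensity-based measure that degrades under the unsimilarities discussed above.

```python
import math
import random

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    va = [v for row in a for v in row]
    vb = [v for row in b for v in row]
    ma, mb = sum(va) / len(va), sum(vb) / len(vb)
    num = sum((x - ma) * (y - mb) for x, y in zip(va, vb))
    den = math.sqrt(sum((x - ma) ** 2 for x in va) *
                    sum((y - mb) ** 2 for y in vb))
    return num / den if den else 0.0

def register(image, reference):
    """Find the integer offset of `image` inside `reference` maximizing ZNCC."""
    ih, iw = len(image), len(image[0])
    rh, rw = len(reference), len(reference[0])
    best, best_score = (0, 0), -2.0
    for dy in range(rh - ih + 1):
        for dx in range(rw - iw + 1):
            patch = [row[dx:dx + iw] for row in reference[dy:dy + ih]]
            s = zncc(image, patch)
            if s > best_score:
                best, best_score = (dx, dy), s
    return best, best_score

# Illustrative data: the "image" is a 3x3 crop of a pseudo-random reference,
# so the true offset is known to be (2, 1).
random.seed(0)
reference = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
image = [row[2:5] for row in reference[1:4]]
offset, score = register(image, reference)
```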

    Robust drone detection with static VIS and SWIR cameras for day and night counter-UAV

    Considerable progress with unmanned aerial vehicles (UAVs) has led to an increasing need for counter-UAV systems that detect present, potentially threatening or misused drones. Therefore, a UAV detection algorithm has recently been developed for day and night operation. Whereas high-resolution VIS cameras enable UAV detection at greater distances in daylight, surveillance at night is performed with a short wave infrared (SWIR) camera. The proposed method is based on temporal median computation, structural adaptive image differencing, codebook-based background learning, local density computation, and shape analysis of foreground structures to perform an improved near-range change detection for UAVs. Areas with moving scene parts, like leaves in the wind or cars driving on a street, are recognized in order to minimize false alarms. This paper presents a significant improvement with respect to some of the most challenging tasks in this field, e.g. increasing the UAV detection sensitivity in front of trees with waving leaves, minimizing false alarms, and avoiding the background model update problem. The provided results illustrate the achieved performance in a variety of different situations.
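Two of the stages named above, temporal median computation and image differencing, can be sketched as follows. The threshold and the synthetic frames are illustrative assumptions; the codebook-based background learning, local density computation, and shape analysis stages of the actual method are omitted.

```python
from statistics import median

def median_background(frames):
    """Per-pixel temporal median over a list of gray-value frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

def detect_changes(frame, background, threshold=30):
    """Binary foreground mask by differencing against the background."""
    return [[1 if abs(v - b) > threshold else 0
             for v, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

A small bright object that moves from frame to frame never dominates the per-pixel median, so the learned background stays clean and the mask flags the object only at its current position; slowly waving background structures, by contrast, contaminate the median, which is why the paper adds the further stages.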

    Safety Assessment of Windshield Washing Technologies

    We present a new assessment method for driver visibility based on reaction time measurement and workload in real driving situations derived from the most relevant accident scenario involving pedestrians. The procedure was validated in a balanced trial comparing a wet flatblade windshield washing system to a conventional fluidic nozzle system. The test cohort comprised 204 subjects forming a representative sample of German driving license holders. The average reaction time gain of the wet flatblade over the fluidic nozzles is 315 ms for pedestrian detection and 270 ms for the recognition of critical traffic situations.

    Video change detection for fixed wing UAVs

    In this paper we continue the work of Bartelsen et al. [1]. We present a draft process chain for image-based change detection designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be ensured simply by a fixed and meaningful adjustment of the camera, keeping the perspective change small for "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system, comprising a differential GPS and an autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data.
For the image processing and change detection, we use the approach of Müller [4]. Although it was developed for unmanned ground vehicles (UGVs), it enables near real-time video change detection for aerial videos. In conclusion, we discuss the demands on sensor systems in the matter of change detection.
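The premise that a small perspective change reduces change detection to a simple photogrammetric problem can be illustrated by registering an "after" frame to a "before" frame over a small range of integer shifts (minimizing the sum of absolute differences on the overlap) and then differencing. This is an illustrative stand-in, not the approach of Müller used in the paper; the gradient-pattern frames and the new-object pixel below are synthetic.

```python
def sad(a, b, dx, dy):
    """Mean absolute difference between b shifted by (dx, dy) and a, on the overlap."""
    h, w = len(a), len(a[0])
    total, count = 0, 0
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            total += abs(a[y][x] - b[y + dy][x + dx])
            count += 1
    return total / count

def change_mask(before, after, search=3, threshold=30):
    """Register `after` to `before` over small integer shifts, then difference."""
    best = min(((dx, dy) for dx in range(-search, search + 1)
                for dy in range(-search, search + 1)),
               key=lambda s: sad(before, after, s[0], s[1]))
    dx, dy = best
    h, w = len(before), len(before[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            if abs(before[y][x] - after[y + dy][x + dx]) > threshold:
                mask[y][x] = 1
    return best, mask

# Synthetic "before"/"after" pair: the scene shifts by (2, 1) between the
# passes, and one new object appears in the "after" pass.
before = [[(7 * x + 13 * y) % 50 for x in range(8)] for y in range(8)]
after = [[0] * 8 for _ in range(8)]
for y in range(8):
    for x in range(8):
        if y >= 1 and x >= 2:
            after[y][x] = before[y - 1][x - 2]
after[4][4] = 200  # the change to be detected

shift, mask = change_mask(before, after)
```

With the residual misalignment removed, the mask flags only the genuinely changed pixel; when trajectory repetition is poor, the shift search space (and thus the cost and fragility of registration) grows, which is what the trajectory-repetition tests quantify.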